Guided Depth Super-Resolution by Deep Anisotropic Diffusion
Performing super-resolution of a depth image using the guidance from an RGB
image is a problem that concerns several fields, such as robotics, medical
imaging, and remote sensing. While deep learning methods have achieved good
results in this problem, recent work highlighted the value of combining modern
methods with more formal frameworks. In this work, we propose a novel approach
which combines guided anisotropic diffusion with a deep convolutional network
and advances the state of the art for guided depth super-resolution. The edge
transferring/enhancing properties of the diffusion are boosted by the
contextual reasoning capabilities of modern networks, and a strict adjustment
step guarantees perfect adherence to the source image. We achieve unprecedented
results on three commonly used benchmarks for guided depth super-resolution.
The performance gain over competing methods is largest at high upsampling
factors, such as ×32. Code for the proposed method will be made available to
promote the reproducibility of our results.
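The core mechanism, diffusion whose conduction weights come from the guide image rather than from the depth map itself, can be sketched in a few lines. The following is an illustrative numpy toy (an explicit Perona-Malik-style scheme with assumed parameters `lam` and `dt`), not the authors' implementation, which couples the diffusion with a learned network and a strict adjustment step.

```python
import numpy as np

def diffusion_coefficients(guide, lam=0.1):
    """Edge-stopping weights from the guide image's gradients
    (Perona-Malik style: g = exp(-(|grad|/lam)^2))."""
    gx = np.diff(guide, axis=1, append=guide[:, -1:])
    gy = np.diff(guide, axis=0, append=guide[-1:, :])
    return np.exp(-(gx / lam) ** 2), np.exp(-(gy / lam) ** 2)

def guided_diffusion_step(depth, guide, dt=0.2, lam=0.1):
    """One explicit diffusion update of the depth map, where the
    conduction weights come from the RGB guide, not the depth itself."""
    cx, cy = diffusion_coefficients(guide, lam)
    # fluxes along x and y, damped near guide-image edges
    fx = cx * np.diff(depth, axis=1, append=depth[:, -1:])
    fy = cy * np.diff(depth, axis=0, append=depth[-1:, :])
    # divergence of the flux (backward differences)
    div = (fx - np.roll(fx, 1, axis=1)) + (fy - np.roll(fy, 1, axis=0))
    return depth + dt * div
```

Because the coefficients are computed from the guide, depth edges are smoothed or preserved wherever the RGB image is smooth or has an edge, which is exactly the edge-transfer property the abstract refers to.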
Crop Classification Under Varying Cloud Cover With Neural Ordinary Differential Equations
Optical satellite sensors cannot see the earth’s surface through clouds. Despite the periodic revisit cycle, image sequences acquired by earth observation satellites are, therefore, irregularly sampled in time. State-of-the-art methods for crop classification (and other time-series analysis tasks) rely on techniques that implicitly assume regular temporal spacing between observations, such as recurrent neural networks (RNNs). We propose to use neural ordinary differential equations (NODEs) in combination with RNNs to classify crop types in irregularly spaced image sequences. The resulting ODE-RNN models consist of two steps: an update step, where a recurrent unit assimilates new input data into the model’s hidden state, and a prediction step, in which the NODE propagates the hidden state until the next observation arrives. The prediction step is based on a continuous representation of the latent dynamics, which has several advantages. At the conceptual level, it is a more natural way to describe the mechanisms that govern the phenological cycle. From a practical point of view, it makes it possible to sample the system state at arbitrary points in time, such that one can integrate observations whenever they are available and extrapolate beyond the last observation. Our experiments show that ODE-RNN indeed improves classification accuracy over common baselines such as LSTM, GRU, temporal convolutional networks, and transformers. The gains are most prominent in the challenging scenario where only a few observations are available (i.e., frequent cloud cover). Moreover, we show that the ability to extrapolate translates to better classification performance early in the season, which is important for forecasting.
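The two-step ODE-RNN loop described above can be illustrated with a minimal numpy sketch. All weights here are random stand-ins for the learned components, and a fixed-step Euler integrator replaces a proper ODE solver; this is a toy, not the paper's model.

```python
import numpy as np

def ode_rnn(observations, times, hidden_dim=16, dt=0.05, rng=None):
    """Toy ODE-RNN over an irregularly sampled sequence.
    observations: (T, D) array of inputs; times: (T,) increasing timestamps."""
    rng = np.random.default_rng(0) if rng is None else rng
    in_dim = observations.shape[1]
    Wf = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # latent dynamics
    Wu = rng.normal(scale=0.1, size=(hidden_dim, in_dim))      # input weights
    Wh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # recurrent weights
    h = np.zeros(hidden_dim)
    t = times[0]
    for x, t_obs in zip(observations, times):
        # prediction step: propagate h continuously until the observation
        # arrives (Euler steps of dh/dt = f(h))
        while t < t_obs - 1e-9:
            h = h + dt * np.tanh(Wf @ h)
            t += dt
        # update step: the recurrent unit assimilates the new observation
        h = np.tanh(Wu @ x + Wh @ h)
        t = t_obs
    return h
```

The same prediction step, run past the last timestamp, is what allows the model to extrapolate beyond the final observation, e.g. for early-season forecasting.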
Fine-grained Population Mapping from Coarse Census Counts and Open Geodata
Fine-grained population maps are needed in several domains, like urban
planning, environmental monitoring, public health, and humanitarian operations.
Unfortunately, in many countries only aggregate census counts over large
spatial units are collected; moreover, these are not always up-to-date. We
present POMELO, a deep learning model that employs coarse census counts and
open geodata to estimate fine-grained population maps with 100m ground sampling
distance. Moreover, the model can also estimate population numbers when no
census counts at all are available, by generalizing across countries. In a
series of experiments for several countries in sub-Saharan Africa, the maps
produced with POMELO are in good agreement with the most detailed available
reference counts: disaggregation of coarse census counts reaches R2 values of
85-89%; unconstrained prediction in the absence of any counts reaches 48-69%.
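The disaggregation setting can be made concrete with a small sketch: given per-pixel occupancy rates (in the paper, predicted by the network from open geodata) and a region label per pixel, pixel populations are rescaled so that each census region sums exactly to its reported count. The function name and inputs are illustrative, not the POMELO code.

```python
import numpy as np

def disaggregate(rates, region_ids, census):
    """Dasymetric disaggregation: rescale per-pixel rates so that, within
    every census region, the pixel populations sum to the reported count.
    rates: (N,) nonnegative scores; region_ids: (N,) region label per pixel;
    census: {region_id: total population}."""
    pop = np.zeros_like(rates, dtype=float)
    for rid, count in census.items():
        mask = region_ids == rid
        total = rates[mask].sum()
        if total > 0:
            pop[mask] = rates[mask] * (count / total)
    return pop
```

The "unconstrained prediction" setting mentioned above corresponds to skipping this rescaling and using the model's raw per-pixel estimates directly, which is why its agreement with reference counts is lower.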
Forecasting Urban Development from Satellite Images
Forecasting where and when new buildings will emerge is a rather unexplored
niche topic, but relevant in disciplines such as urban planning, agriculture,
resource management, and even autonomous flight. In this work, we present a
method that accomplishes this task using satellite images and a custom neural
network training procedure. In stage A, a DeepLabv3+ backbone is pretrained
through a Siamese network architecture aimed at solving a building change
detection task. In stage B, we transfer the backbone into a change forecasting
model that relies solely on the initial input image. We also transfer the
backbone into a forecasting model predicting the correct time range of the
future change. For our experiments, we use the SpaceNet7 dataset with 960 km2
spatial extension and 24 monthly frames. We found that our training strategy
consistently outperforms the traditional pretraining on the ImageNet dataset.
Especially with longer forecasting ranges of 24 months, we observe F1 scores of
24% instead of 16%. Furthermore, we found that our method performs well at
forecasting the times of future building constructions. The strengths of our
custom pretraining become especially apparent when we increase the difficulty
of the task by predicting finer time windows.
Comment: 7 pages, short paper, Master Thesis, 202
Fine-grained Population Maps for Tanzania and Zambia
Official dataset accompanying the paper "Fine-grained Population Mapping from Coarse Census Counts and Open Geodata".
Urban Change Forecasting from Satellite Images
Forecasting where and when new buildings will emerge is a rather unexplored topic, but one that is very useful in many disciplines, such as urban planning, agriculture, resource management, and even autonomous flying. In the present work, we describe a method that accomplishes this task with a deep neural network and a custom pretraining procedure. In Stage 1, a U-Net backbone is pretrained within a Siamese network architecture that aims to solve a (building) change detection task. In Stage 2, the backbone is repurposed to forecast the emergence of new buildings based solely on one image acquired before their construction. Furthermore, we also present a model that forecasts the time range within which the change will occur. We validate our approach on the SpaceNet7 dataset, which covers an area of 960 km² at 24 points in time across 2 years. In our experiments, we found that our proposed pretraining method consistently outperforms traditional pretraining on the ImageNet dataset. We also show that it is to some degree possible to predict in advance when building changes will occur.
ISSN:2512-2819, ISSN:2512-278
Fine-grained population mapping from coarse census counts and open geodata
Fine-grained population maps are needed in several domains, like urban planning, environmental monitoring, public health, and humanitarian operations. Unfortunately, in many countries only aggregate census counts over large spatial units are collected; moreover, these are not always up-to-date. We present Pomelo, a deep learning model that employs coarse census counts and open geodata to estimate fine-grained population maps with 100 m ground sampling distance. Moreover, the model can also estimate population numbers when no census counts at all are available, by generalizing across countries. In a series of experiments for several countries in sub-Saharan Africa, the maps produced with Pomelo are in good agreement with the most detailed available reference counts: disaggregation of coarse census counts reaches R2 values of 85–89%; unconstrained prediction in the absence of any counts reaches 48–69%.
ISSN:2045-232